
Vector Spaces

4.1 Vector Spaces and Subspaces

Vector Space

A nonempty set $V$ of vectors satisfying the following conditions for all $u, v \in V$ and all scalars $c$:

$$u+v \in V, \qquad cu \in V, \qquad \vec 0 \in V$$

Example: The set $P_n$ of polynomials of degree at most $n$, where $n \geq 0$

$$p(t) = a_0t^0 + a_1t^1 + a_2t^2 + \dots + a_nt^n$$

This set contains the zero polynomial: setting every coefficient $a_i = 0$ gives $p(t) = 0$, so $\vec 0 \in P_n$.

It is closed under vector addition

$$(p+q)(t) = p(t) + q(t) \in P_n$$

and scalar multiplication

$$(cp)(t) = cp(t) \in P_n$$

Subspace

$H$ is a subspace of $V$ if it satisfies:

- $\vec 0 \in H$
- $H$ is closed under vector addition: $u + v \in H$ for all $u, v \in H$
- $H$ is closed under scalar multiplication: $cu \in H$ for all $u \in H$ and scalars $c$

Example: $\{\vec 0\} \subseteq V$

The set consisting of only the zero vector in a vector space $V$ is a subspace of $V$, called the zero subspace.

Theorem 1

If $v_1,\dots,v_p$ are in a vector space $V$, then $\text{Span}\{v_1,\dots,v_p\}$ is a subspace of $V$.

Example: Let $H = \text{Span}\{v_1,v_2\}$ for vectors $v_1, v_2$ in a vector space $V$.

We know $H$ is a subspace of $V$ by Theorem 1.

Example: $\R^2$ is not a subspace of $\R^3$, because vectors in $\R^3$ have three entries while vectors in $\R^2$ have only two, so $\R^2$ is not even a subset of $\R^3$.

$$\begin{bmatrix} s\\ t \end{bmatrix} \notin \R^3$$

Example: For what values of $h$ will $y = \begin{bmatrix} -4\\3\\h \end{bmatrix}$ be in the subspace of $\R^3$ spanned by the columns of $A = \begin{bmatrix} 1 & 5 & -3\\ -1 & -4 & 1\\ -2 & -7 & 0 \end{bmatrix}$?

This is equivalent to asking for which values of $h$ some linear combination of the columns equals $y$, i.e., to the existence of a solution $x$ of the linear equation $Ax = y$. So we row reduce the augmented matrix $[A \mid y]$ and find that the system is consistent only for $h = 5$.
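As a sanity check, the same elimination can be scripted (a minimal sketch using SymPy, which these notes don't otherwise assume; the row operations mirror the hand computation above):

```python
import sympy as sp

h = sp.symbols('h')
A = sp.Matrix([[1, 5, -3], [-1, -4, 1], [-2, -7, 0]])
y = sp.Matrix([-4, 3, h])
M = A.row_join(y)                    # augmented matrix [A | y]

M[1, :] = M[1, :] + M[0, :]          # R2 <- R2 + R1
M[2, :] = M[2, :] + 2 * M[0, :]      # R3 <- R3 + 2*R1
M[2, :] = M[2, :] - 3 * M[1, :]      # R3 <- R3 - 3*R2
print(M[2, :])                       # last row: [0, 0, 0, h - 5]
print(sp.solve(M[2, 3], h))          # consistent only when h - 5 = 0 -> [5]
```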

Questions

Let $W$ be the union of the first and third quadrants in the $xy$-plane:

$$W = \left\{ \begin{bmatrix} x\\ y \end{bmatrix} : xy \geq 0 \right\}$$

a. If $u$ is in $W$ and $c$ is any scalar, is $cu$ in $W$?

Yes. If $u = \begin{bmatrix}x\\y\end{bmatrix}$ with $xy \geq 0$, then $cu = \begin{bmatrix}cx\\cy\end{bmatrix}$ and the product of its entries is $(cx)(cy) = c^2xy \geq 0$, so $cu \in W$.

b. Is $W$ a vector space?

No. It is not closed under vector addition: for example, $\begin{bmatrix}1\\0\end{bmatrix}$ and $\begin{bmatrix}0\\-1\end{bmatrix}$ are both in $W$, but their sum $\begin{bmatrix}1\\-1\end{bmatrix}$ is not, since $(1)(-1) < 0$.

Is the following set a subspace of $P_n$: all polynomials of the form $p(t) = a + t^2$, $a \in \R$?

We know that a subspace $H \subseteq P_n$ must contain the zero vector and be closed under addition and scalar multiplication. Every polynomial in this set has $t^2$-coefficient $1$, so the zero polynomial $\vec 0 \in P_n$ is not in $H$; hence $H$ is not a subspace.

Is the following set a subspace of $P_n$: all polynomials in $P_n$ such that $p(0) = 0$?

Yes. These are the polynomials with constant term $0$. The zero polynomial satisfies $p(0) = 0$; and if $p(0) = 0$ and $q(0) = 0$, then $(p+q)(0) = 0$ and $(cp)(0) = 0$, so the set contains $\vec 0$ and is closed under addition and scalar multiplication. Hence it is a subspace of $P_n$.

Let $W$ be the set of all vectors of the form $\begin{bmatrix} s+3t\\ s-t\\ 2s-t\\ 4t \end{bmatrix}$. Show that $W$ is a subspace of $\R^4$.

We can write every vector in $W$ as a linear combination of two fixed vectors:

$$s\begin{bmatrix} 1\\ 1\\ 2\\ 0 \end{bmatrix} + t\begin{bmatrix} 3\\ -1\\ -1\\ 4 \end{bmatrix}$$

so $W = \text{Span}\{v_1, v_2\}$. By Theorem 1, the span of vectors from a vector space $V$ is a subspace of $V$:

$$\text{Span}\{v_1,\dots,v_p\} \subseteq V$$

Is $W = \left\{ \begin{bmatrix} -a+1\\ a-6b\\ 2b+a \end{bmatrix} : a, b \in \R \right\}$ a vector space?

First, we write a general element of $W$ as

$$a \begin{bmatrix} -1\\1\\1 \end{bmatrix} + b \begin{bmatrix} 0\\-6\\2 \end{bmatrix} + \begin{bmatrix} 1\\0\\0 \end{bmatrix} = au + bv + \begin{bmatrix} 1\\0\\0 \end{bmatrix}$$

Now we check whether $W$ satisfies the conditions of a vector space. First, is the zero vector in $W$? Geometrically, $W$ is a plane translated away from the origin, so we need scalars $a, b$ with $au + bv + (1,0,0) = \vec 0$, i.e., $au + bv = -(1,0,0)$. This system has no solution, so the zero vector is not in $W$, and $W$ is not a vector space.
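A quick check of that inconsistency (a minimal SymPy sketch; $u$ and $v$ are the two direction vectors defined above):

```python
import sympy as sp

a, b = sp.symbols('a b')
u = sp.Matrix([-1, 1, 1])
v = sp.Matrix([0, -6, 2])

# Solve a*u + b*v = -(1, 0, 0); an empty solution set means 0 is not in W.
system = (u.row_join(v), sp.Matrix([-1, 0, 0]))
print(sp.linsolve(system, [a, b]))   # EmptySet -> no solution
```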

Let $F$ be a fixed $3\times 2$ matrix, and let $H$ be the set of all matrices $A$ in $M_{2\times 4}$ with the property that $FA = \mathbf{0}$ (the $3\times 4$ zero matrix). Determine if $H$ is a subspace of $M_{2\times 4}$.

$H$ is a subspace of $M_{2\times 4}$: the zero matrix is in $H$ since $F\mathbf{0} = \mathbf{0}$; and if $FA = \mathbf{0}$ and $FB = \mathbf{0}$, then $F(A+B) = FA + FB = \mathbf{0}$ and $F(cA) = c(FA) = \mathbf{0}$, so $H$ is closed under addition and scalar multiplication. (Equivalently, $A \in H$ exactly when every column of $A$ lies in $\text{Nul}\,F$.)

4.2 Null Spaces, Column Spaces, Row Spaces, and Linear Transformations

Null Space (Kernel)

The null space of a matrix $A$ is the set of all solutions $x$ to the homogeneous system $Ax = \vec 0$:

$$\text{Nul}\,A = \{x : Ax = \vec 0\}$$

Theorem 2

The null space of an $m \times n$ matrix $A$ is a subspace of $\R^n$.

Example: Let $H$ be the set of all vectors in $\R^4$ whose coordinates $a, b, c, d$ satisfy the equations $a - 2b + 5c = d$ and $c - a = b$. Show that $H$ is a subspace of $\R^4$.

We can rearrange this system of equations like so:

$$a - 2b + 5c - d = 0\\ -a - b + c = 0$$

So $H$ is the set of solutions to a homogeneous system $Ax = \vec 0$, which is a subspace of $\R^4$ by Theorem 2.
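For a concrete spanning set, the null space of that coefficient matrix can be computed directly (a minimal SymPy sketch, not part of the original notes):

```python
import sympy as sp

# Coefficient matrix of the homogeneous system in (a, b, c, d).
A = sp.Matrix([[1, -2, 5, -1],
               [-1, -1, 1, 0]])
for v in A.nullspace():              # a basis for H = Nul A (two vectors)
    print(v.T)
```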

Column Space (Range)

The column space of an $m \times n$ matrix $A$, written $\text{Col}\,A$, is the set of all linear combinations of the columns of $A$; it is a subspace of $\R^m$.

$$\text{Col}\,A = \text{Span}\{a_1,\dots,a_n\} = \{Ax : x \in \R^n\}$$

Theorem 3

The column space of an $m\times n$ matrix $A$ is a subspace of $\R^m$, since the span of $p$ vectors from a vector space $V$ is automatically a subspace of $V$ (Theorem 1).

Row Space

The set of all linear combinations of the rows of an $m \times n$ matrix $A$ is called the row space of $A$, written $\text{Row}\,A$; it is a subspace of $\R^n$.

Questions

Find an explicit description of $\text{Nul}\,A$ for $A = \begin{bmatrix}1 & -6 & 4 & 0\\0 & 0 & 2 & 0\end{bmatrix}$.

$\text{Nul}\,A$ is the set of solutions to the equation $Ax = \vec 0$. We can express this solution set in parametric form. First, we row reduce $A$ to RREF:

$$A \sim \begin{bmatrix}1 & -6 & 0 & 0\\0 & 0 & 1 & 0\end{bmatrix}$$

This gives the following general solution in terms of the free variables $x_2$ and $x_4$:

$$\vec x = \begin{bmatrix} x_1\\ x_2\\ x_3\\ x_4 \end{bmatrix} = \begin{bmatrix} 6x_2\\ x_2\\ 0\\ 0 \end{bmatrix} + \begin{bmatrix} 0\\ 0\\ 0\\ x_4 \end{bmatrix} = x_2\begin{bmatrix} 6\\ 1\\ 0\\ 0 \end{bmatrix} + x_4\begin{bmatrix} 0\\ 0\\ 0\\ 1 \end{bmatrix}$$

Thus, the null space is the span of these two vectors, and the nullity is $2$.

$$\text{Nul}\,A = \text{Span}\left\{ \begin{bmatrix} 6\\ 1\\ 0\\ 0 \end{bmatrix}, \begin{bmatrix} 0\\ 0\\ 0\\ 1 \end{bmatrix} \right\}$$
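The same spanning set can be read off with SymPy's null-space routine (a sketch, not part of the original notes):

```python
import sympy as sp

A = sp.Matrix([[1, -6, 4, 0],
               [0, 0, 2, 0]])
print(A.rref())                      # RREF and pivot columns (0, 2)
for v in A.nullspace():              # basis vectors (6,1,0,0) and (0,0,0,1)
    print(v.T)
```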

Either show that $W$ is a vector space, or prove the contrary.

$$W = \left\{ \begin{bmatrix} a\\ b\\ c\\ d \end{bmatrix} : a+3b=c,\; b+c+a=d \right\}$$

We can rearrange the two equations into a homogeneous system:

$$a + 3b - c = 0\\ a + b + c - d = 0$$

$W$ is the set of all solutions to a homogeneous system, so it is a subspace of $\R^4$ (and hence a vector space) by Theorem 2.

Either show that $W$ is a vector space, or prove the contrary.

$$W = \left\{ \begin{bmatrix} b-5d\\ 2b\\ 2d+1\\ d \end{bmatrix} : b, d \in \R \right\}$$

We can write a general element of $W$ as a linear combination:

$$b\begin{bmatrix} 1\\ 2\\ 0\\ 0 \end{bmatrix} + d \begin{bmatrix} -5\\ 0\\ 2\\ 1 \end{bmatrix} + \begin{bmatrix} 0\\ 0\\ 1\\ 0 \end{bmatrix}$$

$W$ does not contain the origin: the second and fourth entries force $b = d = 0$, but then the third entry is $1 \neq 0$. Thus $W$ is not a subspace of $\R^4$.

4.3 Bases

Theorem 4

An indexed set $\{v_1,\dots,v_p\}$ of two or more vectors, with $v_1 \neq \vec 0$, is linearly dependent if and only if some $v_j$ (with $j > 1$) is a linear combination of the preceding vectors $v_1,\dots,v_{j-1}$.

Basis

A set of vectors $\mathcal{B} = \{b_1,\dots,b_p\}$ in $V$ is a basis for a subspace $H$ of $V$ if the following two conditions are satisfied:

- $\mathcal{B}$ is a linearly independent set
- $H = \text{Span}\{b_1,\dots,b_p\}$

Theorem 5: The Spanning Set Theorem

Let $S = \{v_1,\dots,v_p\}$ be a set in a vector space $V$, and let $H = \text{Span}\{v_1,\dots,v_p\}$.

- If one of the vectors in $S$, say $v_k$, is a linear combination of the remaining vectors in $S$, then the set formed from $S$ by removing $v_k$ still spans $H$.
- If $H \neq \{\vec 0\}$, some subset of $S$ is a basis for $H$.

Theorem 6

The pivot columns of a matrix $A$ form a basis for $\text{Col}\,A$.

Theorem 7

If two matrices $A$ and $B$ are row equivalent, then their row spaces are the same. If $B$ is in echelon form, the nonzero rows of $B$ form a basis for the row space of $A$ as well as for that of $B$.

Example: Find a basis for the row space of $A$

$$A = \begin{bmatrix} 1 & 4 & 0 & 2 & -1\\ 3 & 12 & 1 & 5 & 5\\ 2 & 8 & 1 & 3 & 2\\ 5 & 20 & 2 & 8 & 8 \end{bmatrix}$$

We row reduce $A$ to echelon form; by Theorem 7, the nonzero rows of the echelon form give a basis for $\text{Row}\,A$. (The pivot columns of $A$ would instead give a basis for $\text{Col}\,A$.)
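A machine check (a minimal SymPy sketch): the nonzero rows of the RREF are one valid basis for $\text{Row}\,A$.

```python
import sympy as sp

A = sp.Matrix([[1, 4, 0, 2, -1],
               [3, 12, 1, 5, 5],
               [2, 8, 1, 3, 2],
               [5, 20, 2, 8, 8]])
R, pivots = A.rref()
basis = [R.row(i) for i in range(len(pivots))]   # nonzero rows of the RREF
for r in basis:
    print(r)
```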

Questions

Is the following set a basis for $\R^3$? $S = \left\{ \begin{bmatrix} 1\\2\\-3 \end{bmatrix}, \begin{bmatrix} -4\\-5\\6 \end{bmatrix} \right\}$

We know that a set $S$ must be linearly independent and span $H$ to be a basis for $H$. A set of two vectors cannot span $\R^3$ (which has dimension $3$), so $S$ is not a basis.

Example: Find bases for the null space and column space of $A = \begin{bmatrix} 1 & 0 & -5 & 1 & 4\\ -2 & 1 & 6 & -2 & -2\\ 0 & 2 & -8 & 1 & 9 \end{bmatrix}$

First, we row reduce to RREF:

$$A \sim \begin{bmatrix} 1 & 0 & -5 & 0 & 7\\ 0 & 1 & -4 & 0 & 6\\ 0 & 0 & 0 & 1 & -3 \end{bmatrix}$$

Columns $1, 2, 4$ are the pivot columns, so the corresponding columns of the original $A$ give

$$\text{Col}\,A = \text{Span}\left\{ \begin{bmatrix} 1\\-2\\0 \end{bmatrix}, \begin{bmatrix} 0\\1\\2 \end{bmatrix}, \begin{bmatrix} 1\\-2\\1 \end{bmatrix} \right\}$$

And we write the general solution of $Ax = \vec 0$ as a linear combination weighted by the free variables $x_3, x_5$:

$$\vec x = \begin{bmatrix} x_1 \\ x_2\\ x_3 \\ x_4 \\ x_5 \end{bmatrix} = \begin{bmatrix} 5x_3 - 7x_5\\ 4x_3 - 6x_5\\ x_3\\ 3x_5 \\ x_5 \end{bmatrix} = x_3\begin{bmatrix} 5 \\ 4\\ 1\\ 0\\ 0 \end{bmatrix} + x_5\begin{bmatrix} -7\\ -6\\ 0\\ 3\\ 1 \end{bmatrix}$$

$$\text{Nul}\,A = \text{Span} \left\{ \begin{bmatrix} 5 \\ 4\\ 1\\ 0\\ 0 \end{bmatrix}, \begin{bmatrix} -7\\ -6\\ 0\\ 3\\ 1 \end{bmatrix} \right\}$$
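Both bases can be checked with SymPy (a sketch; `columnspace` returns the pivot columns of the original $A$, `nullspace` the spanning vectors above):

```python
import sympy as sp

A = sp.Matrix([[1, 0, -5, 1, 4],
               [-2, 1, 6, -2, -2],
               [0, 2, -8, 1, 9]])
print(A.columnspace())   # pivot columns of A: a basis for Col A
print(A.nullspace())     # basis for Nul A
```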

True or False: If $B$ is an echelon form of a matrix $A$, then the pivot columns of $B$ form a basis for $\text{Col}\,A$.

False. The pivot columns of $B$ only indicate which columns of $A$ are pivot columns; it is the corresponding columns of the original matrix $A$ that form a basis for $\text{Col}\,A$, since row operations generally change the column space.

4.4 Coordinate Systems

Theorem 8: The Unique Representation Theorem

Let $\mathcal{B} = \{b_1,\dots,b_n\}$ be a basis for a vector space $V$. Then for each $x$ in $V$, there exists a unique set of scalars $c_1,\dots,c_n$ such that

$$x = c_1b_1 + \dots + c_nb_n$$

The scalars $c_1,\dots,c_n$ are called the coordinates of $x$ relative to the basis $\mathcal{B}$, collected in the coordinate vector

$$[x]_\mathcal{B} = \begin{bmatrix} c_1\\ \vdots\\ c_n \end{bmatrix}$$

Using these coefficients $[x]_\mathcal{B}$, we can express the unique point $x$ as a linear combination of the basis vectors in $\mathcal{B}$ (writing $\mathcal{B}$ also for the matrix $[b_1 \cdots b_n]$):

$$x = \mathcal{B}[x]_\mathcal{B} = [b_1 \cdots b_n] \begin{bmatrix} c_1\\ \vdots\\ c_n \end{bmatrix} = c_1b_1 + \dots + c_nb_n$$

Example: Let $\mathcal{B} = \left\{ \begin{bmatrix} 1\\0 \end{bmatrix}, \begin{bmatrix} -2\\3 \end{bmatrix} \right\}$ be a basis for $\R^2$ and $[x]_\mathcal{B} = \begin{bmatrix} -2\\3 \end{bmatrix}$. Find $x$.

The vector $x$ is the linear combination of the basis vectors in $\mathcal{B}$ with weights $[x]_\mathcal{B}$:

$$x = \mathcal{B}[x]_\mathcal{B} = \begin{bmatrix} 1 & -2\\ 0 & 3 \end{bmatrix} \begin{bmatrix} -2\\3 \end{bmatrix} = \begin{bmatrix} -8\\9 \end{bmatrix}$$

Example: Let $\mathcal{B} = \left\{ \begin{bmatrix}2\\1\end{bmatrix}, \begin{bmatrix}-1\\1\end{bmatrix} \right\}$ and $x = \begin{bmatrix}4\\5\end{bmatrix}$. Find the coordinate vector $[x]_\mathcal{B}$.

We solve the equation $x = \mathcal{B}[x]_\mathcal{B}$ for $[x]_\mathcal{B}$ by row reducing the augmented matrix $[\mathcal{B} \mid x]$, which gives $[x]_\mathcal{B} = \begin{bmatrix} 3\\2 \end{bmatrix}$.
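Numerically (a minimal SymPy sketch), solving $\mathcal{B}[x]_\mathcal{B} = x$ gives the same coordinate vector:

```python
import sympy as sp

B = sp.Matrix([[2, -1],
               [1, 1]])              # columns are the basis vectors
x = sp.Matrix([4, 5])
print(B.solve(x))                    # [x]_B = (3, 2)
```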

Theorem 9

Let $\mathcal{B}=\{b_1,\dots,b_n\}$ be a basis for a vector space $V$. Then the coordinate mapping $x \mapsto [x]_\mathcal{B}$ is a one-to-one linear transformation from $V$ onto $\R^n$.

Isomorphism

Two vector spaces $V$ and $W$ are isomorphic when there is a one-to-one linear transformation from $V$ onto $W$; for finite-dimensional spaces this happens exactly when $\dim V = \dim W$. For example, $P_3$ is isomorphic to $\R^4$ via the coordinate mapping relative to the basis $\{1, t, t^2, t^3\}$.

Questions

The set $\mathcal{B} = \{1-t^2,\; t-t^2,\; 2-2t+t^2\}$ is a basis for $P_2$. Find the coordinate vector of $p(t) = 3+t-6t^2$ relative to $\mathcal{B}$.

Writing each basis polynomial and $p$ as a coordinate vector relative to the standard basis $\{1, t, t^2\}$, solve the system $x = \mathcal{B}[x]_\mathcal{B}$ for $[x]_\mathcal{B}$ by reducing the augmented matrix $[\mathcal{B} \mid x]$; this gives $[p]_\mathcal{B} = \begin{bmatrix} 7\\ -3\\ -2 \end{bmatrix}$.
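A quick verification (a SymPy sketch; the columns of `B` are the standard-basis coordinate vectors of the basis polynomials):

```python
import sympy as sp

# Coordinates of 1 - t^2, t - t^2, 2 - 2t + t^2 relative to {1, t, t^2}.
B = sp.Matrix([[1, 0, 2],
               [0, 1, -2],
               [-1, -1, 1]])
p = sp.Matrix([3, 1, -6])            # coordinates of 3 + t - 6t^2
print(B.solve(p))                    # [p]_B = (7, -3, -2)
```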

True or False? If $\mathcal{B}$ is the standard basis for $\R^n$, then the $\mathcal{B}$-coordinate vector of an $x$ in $\R^n$ is $x$ itself.

True. The standard basis for $\R^n$ is $\{e_1,\dots,e_n\}$, so the basis matrix is $[e_1 \cdots e_n] = I$ and $x = I[x]_\mathcal{B} = [x]_\mathcal{B}$.

Let $p_1(t) = 1 + t^2$, $p_2(t) = t - 3t^2$, $p_3(t) = 1 + t - 3t^2$.

a. Use coordinate vectors to show that these polynomials form a basis for $P_2$

Rewriting the given polynomials as coordinate vectors relative to the standard basis $\{1, t, t^2\}$ of $P_2$:

$$\begin{bmatrix} 1\\0\\1 \end{bmatrix}, \begin{bmatrix} 0\\1\\-3 \end{bmatrix}, \begin{bmatrix} 1\\1\\-3 \end{bmatrix}$$

We find that these $3$ vectors are linearly independent, so by the Basis Theorem (Theorem 13) they form a basis for $\R^3$, which is isomorphic to $P_2$. Thus the polynomials form a basis of $P_2$.

b. Consider the basis $\mathcal{B} = \{p_1,p_2,p_3\}$ for $P_2$. Find $q$ in $P_2$, given that $[q]_\mathcal{B} = \begin{bmatrix}-1\\1\\2\end{bmatrix}$

$$q = \mathcal{B}[q]_\mathcal{B} = -p_1 + p_2 + 2p_3 = -(1+t^2) + (t-3t^2) + 2(1+t-3t^2) = 1 + 3t - 10t^2$$

4.5 The Dimension of a Vector Space

Theorem 10

If a vector space $V$ has a basis $\mathcal{B}=\{b_1,\dots,b_n\}$, then any set in $V$ containing more than $n$ vectors must be linearly dependent.

Theorem 11

If a vector space $V$ has a basis of $n$ vectors, then every basis of $V$ must consist of exactly $n$ vectors.

Dimension

The dimension of a finite-dimensional vector space $V$, written $\dim V$, is the number of vectors in a basis for $V$; $\dim\{\vec 0\} = 0$.

Example: Find the dimension of the subspace $H = \left\{ \begin{bmatrix} a-3b+6c\\ 5a+4d\\ b-2c-d\\ 5d \end{bmatrix} : a, b, c, d \in \R \right\}$

First, we write a general element of $H$ as $av_1 + bv_2 + cv_3 + dv_4$, where

$$v_1 = \begin{bmatrix} 1\\5\\0\\0 \end{bmatrix}, \quad v_2 = \begin{bmatrix} -3\\0\\1\\0 \end{bmatrix}, \quad v_3 = \begin{bmatrix} 6\\0\\-2\\0 \end{bmatrix}, \quad v_4 = \begin{bmatrix} 0\\4\\-1\\5 \end{bmatrix}$$

Now we row reduce $\begin{bmatrix} v_1 & v_2 & v_3 & v_4 \end{bmatrix}$ to RREF and find that the pivot columns are $1, 2, 4$ (indeed $v_3 = -2v_2$), so the corresponding vectors form a basis for $H$:

$$\mathcal{B} = \{v_1, v_2, v_4\}, \qquad \dim H = 3$$
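The pivot columns can also be found mechanically (a SymPy sketch of the same row reduction):

```python
import sympy as sp

V = sp.Matrix([[1, -3, 6, 0],
               [5, 0, 0, 4],
               [0, 1, -2, -1],
               [0, 0, 0, 5]])        # columns are v1, v2, v3, v4
_, pivots = V.rref()
print(pivots)                        # (0, 1, 3): v1, v2, v4 form a basis
print(V.rank())                      # dim H = 3
```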

Theorem 12

Let $H$ be a subspace of a finite-dimensional vector space $V$. Any linearly independent set in $H$ can be expanded, if necessary, to a basis for $H$. Also, $H$ is finite-dimensional and $\dim H \leq \dim V$.

Theorem 13: The Basis Theorem

Let $V$ be a $p$-dimensional vector space, $p \geq 1$. Any linearly independent set of exactly $p$ elements in $V$ is automatically a basis for $V$. Any set of exactly $p$ elements that spans $V$ is automatically a basis for $V$.

Rank and Nullity

The rank of an $m \times n$ matrix $A$ is the dimension of the column space (the number of pivot columns), and the nullity of $A$ is the dimension of the null space (the number of free variables).

Theorem 14: The Rank Theorem

$$\text{rank}\,A + \dim\text{Nul}\,A = n \qquad (\text{the number of columns of } A)$$

Thus, for an $n \times n$ matrix $A$, we can add the following equivalent statements to the Invertible Matrix Theorem ($A$ is invertible if and only if):

- The columns of $A$ form a basis of $\R^n$
- $\text{Col}\,A = \R^n$
- $\dim\text{Col}\,A = n$
- $\text{rank}\,A = n$
- $\text{Nul}\,A = \{\vec 0\}$
- $\dim\text{Nul}\,A = 0$

Questions

Find a basis for $W$ and state its dimension

$$W = \left\{ \begin{bmatrix} 3a+6b-c\\ 6a-2b-2c\\ -9a+5b+3c\\ -3a+b+c \end{bmatrix} : a, b, c \in \R \right\}$$

First, we express a general element of $W$ as a linear combination of vectors:

$$a \begin{bmatrix} 3\\6\\-9\\-3 \end{bmatrix} + b \begin{bmatrix} 6\\-2\\5\\1 \end{bmatrix} + c \begin{bmatrix} -1\\-2\\3\\1 \end{bmatrix}$$

Now we row-reduce these columns as one matrix:

$$\sim \begin{bmatrix} 1 & 0 & -1/3\\ 0 & 1 & 0\\ 0 & 0 & 0\\ 0 & 0 & 0 \end{bmatrix}$$

We find that there are $2$ pivot columns and $1$ free variable (the third column is $-\tfrac{1}{3}$ times the first). Thus the first two of the original columns form a basis for $W$, and the nullity of the coefficient matrix is $1$.

$$\dim W = 2$$

Find a basis for $W$ and state its dimension.

$$W = \{(a,b,c,d) : a - 3b + c = 0\}$$

$W$ is defined by one linear equation in $4$ variables. This corresponds to a homogeneous system whose coefficient matrix has one row, with $1$ pivot column and $3$ free variables:

$$\begin{bmatrix} 1 & -3 & 1 & 0 \end{bmatrix}$$

We can write the solution to this homogeneous equation in general form

$$\vec x = \begin{bmatrix} a\\b\\c\\d \end{bmatrix} = \begin{bmatrix} 3b-c\\ b\\ c\\ d \end{bmatrix} = b \begin{bmatrix} 3\\1\\0\\0 \end{bmatrix} + c \begin{bmatrix} -1\\0\\1\\0\end{bmatrix} + d \begin{bmatrix} 0\\0\\0\\1 \end{bmatrix}$$

The dimension of the solution set of a homogeneous system equals the number of free variables (the nullity), so these three vectors form a basis for $W$ and $\dim W = 3$.

Determine the dimensions of $\text{Nul}\,A$, $\text{Col}\,A$, and $\text{Row}\,A$ for

$$A = \begin{bmatrix} 1 & 3 & -4 & 2 & -1 & 6\\ 0 & 0 & 1 & -3 & 7 & 0\\ 0 & 0 & 0 & 1 & 4 & -3\\ 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix}$$

First, we reduce $A$ to RREF:

$$\sim \begin{bmatrix} 1 & 3 & 0 & 0 & 67 & -24\\ 0 & 0 & 1 & 0 & 19 & -9\\ 0 & 0 & 0 & 1 & 4 & -3\\ 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix}$$

Here there are $3$ pivot columns (columns $1, 3, 4$) and $3$ free variables. The dimensions of the column space and the row space both equal the number of pivot columns, and the dimension of the null space equals the number of free variables:

$$\dim\text{Nul}\,A = \text{number of free variables} = 3\\ \dim\text{Col}\,A = \dim\text{Row}\,A = \text{number of pivot columns} = 3$$
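Equivalently (a SymPy sketch), the rank gives all three dimensions at once:

```python
import sympy as sp

A = sp.Matrix([[1, 3, -4, 2, -1, 6],
               [0, 0, 1, -3, 7, 0],
               [0, 0, 0, 1, 4, -3],
               [0, 0, 0, 0, 0, 0]])
r = A.rank()                         # number of pivot columns
print(r, A.cols - r)                 # dim Col A = dim Row A = 3, dim Nul A = 3
```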

True or False? The dimensions of the row space and the column space of $A$ are the same, even if $A$ is not square.

True. Both dimensions equal the number of pivots of $A$: each pivot lies in exactly one row and exactly one column, so the two counts agree. This common value is $\text{rank}\,A$.

The first four Laguerre polynomials are $1$, $1-t$, $2-4t+t^2$, and $6-18t+9t^2-t^3$. Show that these polynomials form a basis of $P_3$.

First, we write these polynomials as coordinate vectors relative to the standard basis $\{1, t, t^2, t^3\}$:

$$\begin{bmatrix} 1 \\0 \\0\\0 \end{bmatrix}, \begin{bmatrix} 1\\-1\\0\\0 \end{bmatrix}, \begin{bmatrix} 2\\-4\\1\\0 \end{bmatrix}, \begin{bmatrix} 6\\-18\\9\\-1 \end{bmatrix}$$

These $4$ vectors are linearly independent (the matrix with these columns is triangular with nonzero diagonal entries). Since $P_3$ is isomorphic to $\R^4$, the Basis Theorem (Theorem 13) says that any $4$ linearly independent vectors in a $4$-dimensional space automatically form a basis; thus the Laguerre polynomials form a basis of $P_3$.
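A determinant check (a SymPy sketch; the columns are the coordinate vectors above):

```python
import sympy as sp

L = sp.Matrix([[1, 1, 2, 6],
               [0, -1, -4, -18],
               [0, 0, 1, 9],
               [0, 0, 0, -1]])       # triangular, so det = product of diagonal
print(L.det())                       # 1 != 0 -> the columns are linearly independent
```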

4.6 Change of Basis

Change-of-Coordinates Matrix

A change-of-coordinates matrix $P_{\mathcal{C} \leftarrow \mathcal{B}}$ expresses each basis vector in $\mathcal{B}$ as a linear combination of the basis vectors in $\mathcal{C}$, so that (writing $\mathcal{B}$ and $\mathcal{C}$ also for the matrices whose columns are the basis vectors)

$$\mathcal{C}P_{\mathcal{C} \leftarrow \mathcal{B}} = \mathcal{B}$$

Alternatively, this matrix takes the coefficients $[x]_\mathcal{B}$, which represent the vector $x$ as a linear combination of the basis vectors in $\mathcal{B}$, and transforms them into the coefficients $[x]_\mathcal{C}$:

$$[x]_\mathcal{C} = P_{\mathcal{C} \leftarrow \mathcal{B}} [x]_\mathcal{B}$$

Theorem 15

Let $\mathcal{B}=\{b_1,\dots,b_n\}$ and $\mathcal{C}=\{c_1,\dots,c_n\}$ be bases of a vector space $V$. Then there is a unique $n\times n$ change-of-coordinates matrix $P_{\mathcal{C} \leftarrow \mathcal{B}}$ such that

$$[x]_\mathcal{C} = P_{\mathcal{C} \leftarrow \mathcal{B}}[x]_\mathcal{B}$$

where the change-of-coordinates matrix from $\mathcal{B}$ to $\mathcal{C}$ contains the coefficients needed to express each basis vector of $\mathcal{B}$ as a linear combination of the basis vectors in $\mathcal{C}$:

$$P_{\mathcal{C} \leftarrow \mathcal{B}} = \begin{bmatrix} [b_1]_\mathcal{C} & [b_2]_\mathcal{C} & \dots & [b_n]_\mathcal{C} \end{bmatrix}$$

Think of the column vectors $[b_i]_\mathcal{C}$ in $P_{\mathcal{C} \leftarrow \mathcal{B}}$ as the coefficients used to express each basis vector $b_i$ in $\mathcal{B}$ as a linear combination of the basis vectors $c_1,\dots,c_n$ in $\mathcal{C}$.

Equivalently, multiplying the matrix of basis vectors of $\mathcal{C}$ by the change-of-coordinates matrix produces the matrix of basis vectors of $\mathcal{B}$:

$$\mathcal{C}P_{\mathcal{C} \leftarrow \mathcal{B}} = \mathcal{B}, \qquad \begin{bmatrix} c_1 & \dots & c_n \end{bmatrix}\begin{bmatrix} [b_1]_\mathcal{C} & [b_2]_\mathcal{C} & \dots & [b_n]_\mathcal{C} \end{bmatrix} = \begin{bmatrix} b_1 & \dots & b_n \end{bmatrix}$$

This allows us to solve directly for $P_{\mathcal{C} \leftarrow \mathcal{B}}$ via row reduction:

$$[\mathcal{C} \mid \mathcal{B}] \sim [I \mid P_{\mathcal{C} \leftarrow \mathcal{B}}]$$

Example: Let $b_1 = \begin{bmatrix} -9\\1 \end{bmatrix}$, $b_2 = \begin{bmatrix} -5\\-1 \end{bmatrix}$, $c_1 = \begin{bmatrix} 1\\-4 \end{bmatrix}$, $c_2 = \begin{bmatrix} 3\\-5 \end{bmatrix}$. Find the change-of-coordinates matrix $P_{\mathcal{C} \leftarrow \mathcal{B}}$.

We solve the system $\mathcal{C}P_{\mathcal{C} \leftarrow \mathcal{B}} = \mathcal{B}$ via row reduction:

$$[\mathcal{C} \mid \mathcal{B}] \sim [I \mid P_{\mathcal{C} \leftarrow \mathcal{B}}]$$

$$\left[ \begin{array}{cc|cc} 1 & 3 & -9 & -5\\ -4 & -5 & 1 & -1 \end{array} \right] \sim \left[ \begin{array}{cc|cc} 1 & 0 & 6 & 4\\ 0 & 1 & -5 & -3 \end{array} \right]$$

Thus:

$$P_{\mathcal{C} \leftarrow \mathcal{B}} = \begin{bmatrix} [b_1]_\mathcal{C} & [b_2]_\mathcal{C} \end{bmatrix} = \begin{bmatrix} 6 & 4\\ -5 & -3 \end{bmatrix}$$
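The same matrix can be obtained by solving $\mathcal{C}P = \mathcal{B}$ directly (a SymPy sketch):

```python
import sympy as sp

C = sp.Matrix([[1, 3], [-4, -5]])    # columns c1, c2
B = sp.Matrix([[-9, -5], [1, -1]])   # columns b1, b2
P = C.inv() * B                      # change-of-coordinates matrix P_{C<-B}
print(P)                             # [[6, 4], [-5, -3]]
```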

Questions

Let $\mathcal{B}=\{b_1, b_2\}$ and $\mathcal{C}=\{c_1, c_2\}$ be bases for a vector space $V$, and let $b_1 = -c_1+4c_2$, $b_2 = 5c_1-3c_2$. Find the change-of-coordinates matrix $P_{\mathcal{C} \leftarrow \mathcal{B}}$. Then find $[x]_\mathcal{C}$ for $x = 5b_1 + 3b_2$.

We are given the basis vectors of $\mathcal{B}$ expressed in terms of the basis vectors of $\mathcal{C}$; these coefficient vectors $[b_i]_\mathcal{C}$ are the columns of the change-of-coordinates matrix $P_{\mathcal{C} \leftarrow \mathcal{B}}$:

$$\mathcal{C}P_{\mathcal{C} \leftarrow \mathcal{B}} = \mathcal{B}, \qquad \begin{bmatrix} c_1 & c_2 \end{bmatrix} \begin{bmatrix} -1 & 5\\ 4 & -3 \end{bmatrix} = \begin{bmatrix} -c_1+4c_2 & 5c_1-3c_2 \end{bmatrix}$$

$$P_{\mathcal{C} \leftarrow \mathcal{B}} = \begin{bmatrix} -1 & 5\\ 4 & -3 \end{bmatrix}$$

We can then use this change-of-coordinates matrix to express the vector $x$ as a linear combination of the basis vectors in $\mathcal{C}$ instead of the basis vectors in $\mathcal{B}$. Since $x = 5b_1 + 3b_2$, we have $[x]_\mathcal{B} = \begin{bmatrix} 5\\3 \end{bmatrix}$, so

$$[x]_\mathcal{C} = P_{\mathcal{C} \leftarrow \mathcal{B}}[x]_\mathcal{B} = \begin{bmatrix} -1 & 5\\ 4 & -3 \end{bmatrix} \begin{bmatrix} 5\\3 \end{bmatrix} = \begin{bmatrix} 10\\ 11 \end{bmatrix}$$

This set of coefficients $[x]_\mathcal{C}$ expresses $x$ as a linear combination of the basis vectors in $\mathcal{C}$:

$$\mathcal{C}[x]_\mathcal{C} = x$$

Let $\mathcal{A} = \{a_1,a_2,a_3\}$ and $\mathcal{D} = \{d_1,d_2,d_3\}$ be bases for $V$, and let $P = \begin{bmatrix} [d_1]_\mathcal{A} & [d_2]_\mathcal{A} & [d_3]_\mathcal{A} \end{bmatrix}$. Write the expression that uses this matrix to transform a vector $x$ in $V$ from one basis representation to the other.

Each column of $P$ is the coefficient vector $[d_i]_\mathcal{A}$ that expresses the basis vector $d_i$ of $\mathcal{D}$ as a linear combination of the basis vectors in $\mathcal{A}$, so that

$$\mathcal{A}[d_i]_\mathcal{A} = d_i$$

This is the change-of-coordinates matrix from basis $\mathcal{D}$ to basis $\mathcal{A}$:

$$P_{\mathcal{A} \leftarrow \mathcal{D}} = \begin{bmatrix} [d_1]_\mathcal{A} & [d_2]_\mathcal{A} & [d_3]_\mathcal{A} \end{bmatrix}$$

We can use it to obtain the coefficients $[x]_\mathcal{A}$ that describe a vector $x$ as a linear combination of the basis vectors in $\mathcal{A}$ instead of the basis vectors in $\mathcal{D}$, by applying the change-of-coordinates matrix directly to the coefficients $[x]_\mathcal{D}$:

$$[x]_\mathcal{A} = P_{\mathcal{A} \leftarrow \mathcal{D}} [x]_\mathcal{D}$$

Let $\mathcal{D} = \{d_1,d_2,d_3\}$ and $\mathcal{F} = \{f_1,f_2,f_3\}$ be bases for $V$. Let $f_1 = 2d_1 - d_2 + d_3$, $f_2 = 3d_2 + d_3$, $f_3 = -3d_1 + 2d_3$. Find $P_{\mathcal{D} \leftarrow \mathcal{F}}$. Find $[x]_\mathcal{D}$ for $x = f_1 - 2f_2 + 2f_3$.

We are given the basis vectors of $\mathcal{F}$ expressed in terms of the basis vectors of $\mathcal{D}$. These coordinate vectors $[f_i]_\mathcal{D}$ form the columns of the change-of-coordinates matrix $P_{\mathcal{D} \leftarrow \mathcal{F}}$:

$$P_{\mathcal{D} \leftarrow \mathcal{F}} = \begin{bmatrix} [f_1]_\mathcal{D} & [f_2]_\mathcal{D} & [f_3]_\mathcal{D} \end{bmatrix}$$

Using the given equations:

$$P_{\mathcal{D} \leftarrow \mathcal{F}} = \begin{bmatrix} 2 & 0 & -3\\ -1 & 3 & 0\\ 1 & 1 & 2 \end{bmatrix}$$

To find the coefficients $[x]_\mathcal{D}$ for $x = f_1 - 2f_2 + 2f_3 = \mathcal{F}[x]_\mathcal{F}$, i.e., $[x]_\mathcal{F} = \begin{bmatrix} 1\\-2\\2 \end{bmatrix}$, we apply the change-of-coordinates matrix:

$$[x]_\mathcal{D} = P_{\mathcal{D} \leftarrow \mathcal{F}}[x]_\mathcal{F} = 1[f_1]_\mathcal{D} - 2[f_2]_\mathcal{D} + 2[f_3]_\mathcal{D} = \begin{bmatrix} -4\\ -7\\ 3 \end{bmatrix}$$
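A quick check of that arithmetic (SymPy sketch):

```python
import sympy as sp

P = sp.Matrix([[2, 0, -3],
               [-1, 3, 0],
               [1, 1, 2]])           # P_{D<-F}
x_F = sp.Matrix([1, -2, 2])          # [x]_F for x = f1 - 2*f2 + 2*f3
print(P * x_F)                       # [x]_D = (-4, -7, 3)
```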

Find $P_{\mathcal{C} \leftarrow \mathcal{B}}$ and $P_{\mathcal{B} \leftarrow \mathcal{C}}$, given $\mathcal{B} = \begin{bmatrix}-1 & 1\\8 & -5\end{bmatrix}$ and $\mathcal{C} = \begin{bmatrix}1 & 1\\4 & 1\end{bmatrix}$.

We solve the system $\mathcal{C}P_{\mathcal{C} \leftarrow \mathcal{B}} = \mathcal{B}$ by row reducing $[\mathcal{C} \mid \mathcal{B}] \sim [I \mid P_{\mathcal{C} \leftarrow \mathcal{B}}]$, which gives $P_{\mathcal{C} \leftarrow \mathcal{B}} = \begin{bmatrix} 3 & -2\\ -4 & 3 \end{bmatrix}$. Since $P_{\mathcal{B} \leftarrow \mathcal{C}}$ is the inverse of $P_{\mathcal{C} \leftarrow \mathcal{B}}$, we get $P_{\mathcal{B} \leftarrow \mathcal{C}} = \begin{bmatrix} 3 & 2\\ 4 & 3 \end{bmatrix}$.
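Carrying out the computation (a SymPy sketch; note that $P_{\mathcal{B} \leftarrow \mathcal{C}}$ is the inverse of $P_{\mathcal{C} \leftarrow \mathcal{B}}$):

```python
import sympy as sp

B = sp.Matrix([[-1, 1], [8, -5]])    # columns b1, b2
C = sp.Matrix([[1, 1], [4, 1]])      # columns c1, c2
P_CB = C.inv() * B                   # solves C * P = B
P_BC = P_CB.inv()                    # equivalently B.inv() * C
print(P_CB)                          # [[3, -2], [-4, 3]]
print(P_BC)                          # [[3, 2], [4, 3]]
```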

Find the change-of-coordinates matrix from the basis $\mathcal{B} = \{1-3t^2,\; 2+t-5t^2,\; 1+2t\}$ to the standard basis of $P_2$. Then write $t^2$ as a linear combination of the polynomials in $\mathcal{B}$.

We want the matrix $P_{\mathcal{E} \leftarrow \mathcal{B}}$, where $\mathcal{E} = \{1, t, t^2\}$ is the standard basis. As a matrix of basis vectors, $\mathcal{E}$ is just the identity matrix $I$, so

$$P_{\mathcal{E} \leftarrow \mathcal{B}} = \mathcal{B}$$

That is, the change-of-coordinates matrix from $\mathcal{B}$ to the standard basis is just $\mathcal{B}$ itself, with the $\mathcal{E}$-coordinate vectors of the basis polynomials as its columns:

$$P_{\mathcal{E} \leftarrow \mathcal{B}} = \mathcal{B} = \begin{bmatrix} 1 & 2 & 1\\ 0 & 1 & 2\\ -3 & -5 & 0 \end{bmatrix}$$

To write $t^2$ as a linear combination of the polynomials in $\mathcal{B}$, solve $\mathcal{B}[t^2]_\mathcal{B} = [t^2]_\mathcal{E} = \begin{bmatrix} 0\\0\\1 \end{bmatrix}$; row reduction gives $[t^2]_\mathcal{B} = \begin{bmatrix} 3\\-2\\1 \end{bmatrix}$, i.e., $t^2 = 3(1-3t^2) - 2(2+t-5t^2) + (1+2t)$.
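And a check of that last computation (SymPy sketch):

```python
import sympy as sp

B = sp.Matrix([[1, 2, 1],
               [0, 1, 2],
               [-3, -5, 0]])         # columns: coordinates of the B polynomials
t2 = sp.Matrix([0, 0, 1])            # coordinates of t^2 in the standard basis
print(B.solve(t2))                   # [t^2]_B = (3, -2, 1)
```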